42 research outputs found

    Evolutionary swarm robotics: a theoretical and methodological itinerary from individual neuro-controllers to collective behaviours

    In the last decade, swarm robotics has gathered much attention in the research community. Drawing inspiration from social insects and other self-organizing systems, it focuses on large robot groups featuring distributed control, adaptation, high robustness, and flexibility. Various reasons lie behind the interest in such multi-robot systems. Above all, inspiration comes from the observation of social activities, which are based on concepts like division of labor, cooperation, and communication. If societies are organized in this way in order to be more efficient, then robot groups could also benefit from similar paradigms.

    Neurological modeling of what experts vs. non-experts find interesting

    The P3 and related ERPs have a long history of use to identify stimulus events in subjects as part of oddball-style experiments. In this work we describe the ongoing development of oddball-style experiments which attempt to capture what a subject finds interesting or curious when presented with a set of visual stimuli, i.e. images. This joint work between Dublin City University (DCU) and the European Space Agency's Advanced Concepts Team (ESA ACT) is motivated by the challenges of autonomous space exploration, where the time lag for sending data back to Earth for analysis and then communicating an action or decision back to the spacecraft means that decision-making is slow. Also, when extraterrestrial sensors capture data, the determination of what data to send back to Earth is driven by an expertly devised rule set; that is, scientists need to determine a priori what will be of interest. This cannot adapt to novel or unexpected data that a scientist might find curious. Our work attempts to determine whether it is possible to capture what a scientist (subject) finds of interest (curious) in a stream of image data through EEG measurement. One of our challenges is to determine the difference between an expert's and a lay subject's response to a stimulus. To investigate the theorized difference, we use a set of lifelog images as our dataset. Lifelog images are first-person images taken by a small wearable camera which continuously records images whilst it is worn. We have devised two key experiments for use with this data and two classes of subjects. Our expert subject is the person who wore the personal camera from which our collection of lifelog images is taken; the remaining subjects are people who have no association with the captured images. Our first experiment is a traditional oddball experiment in which the oddballs are images of people having coffee, and it can be thought of as a directed information-seeking task. The second experiment presents a stream of lifelog images to the subjects and records which images cause a stimulus response. Once the data from these experiments has been captured, our task is to compare the responses of the expert and lay subject groups, to determine whether there are any commonalities or distinct differences between them. If the latter is the case, the objective is then to investigate methods for capturing the properties of images which cause an expert to be interested in a presented image. Further novelty is added to our work by the fact that we are using entry-level off-the-shelf EEG devices, consisting of 4 nodes with a sampling rate of 255 Hz.
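    The analysis such an experiment calls for can be illustrated with a short sketch: epochs are cut around each stimulus onset, baseline-corrected, and averaged separately for oddball and standard images so that the P3 difference becomes visible. This is only a minimal illustration assuming the 4-channel, 255 Hz recordings are available as a NumPy array; the epoch window, P3 latency window, and array layout are assumptions, not details taken from the paper.

```python
# Minimal sketch: average ERPs for oddball vs. standard images from a
# 4-channel EEG stream sampled at 255 Hz (values mirror the abstract;
# array names and event format are assumptions for illustration).
import numpy as np

FS = 255                      # sampling rate in Hz
PRE, POST = 0.2, 0.8          # epoch window around stimulus onset (s)
P3_WIN = (0.3, 0.5)           # latency window where the P3 is expected (s)

def epoch(eeg, onsets):
    """Cut baseline-corrected epochs: eeg is (n_channels, n_samples),
    onsets are stimulus sample indices."""
    pre, post = int(PRE * FS), int(POST * FS)
    eps = []
    for s in onsets:
        if s - pre < 0 or s + post > eeg.shape[1]:
            continue
        e = eeg[:, s - pre:s + post].astype(float)
        e -= e[:, :pre].mean(axis=1, keepdims=True)   # baseline correction
        eps.append(e)
    return np.stack(eps)                               # (n_epochs, n_ch, n_samp)

def p3_amplitude(erp):
    """Mean amplitude in the P3 window of an averaged ERP (n_ch, n_samp)."""
    lo = int((PRE + P3_WIN[0]) * FS)
    hi = int((PRE + P3_WIN[1]) * FS)
    return erp[:, lo:hi].mean(axis=1)

# eeg: (4, n_samples) recording; oddball_onsets / standard_onsets: sample indices
# erp_odd = epoch(eeg, oddball_onsets).mean(axis=0)
# erp_std = epoch(eeg, standard_onsets).mean(axis=0)
# print("P3 difference per channel:", p3_amplitude(erp_odd) - p3_amplitude(erp_std))
```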

    Curiosity cloning: neural analysis of scientific knowledge

    Event-related potentials (ERPs) are indicators of brain activity related to cognitive processes. They can be detected from EEG signals and thus constitute an attractive non-invasive option to study cognitive information processing. The P300 wave is probably the most celebrated example of an event-related potential, and it is classically studied in connection with the oddball-paradigm experimental protocol, which is able to consistently provoke the brain wave. We propose the use of P300 detection to identify scientific interest in a large set of images and to train a computer with machine learning algorithms, using the subject's responses to the stimuli as the training data set. As a first step, we here describe a number of experiments designed to relate the P300 brain wave to the cognitive processes involved in placing a scientific judgment on a picture, and to study the number of images per second that can be processed by such a system.
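    A minimal sketch of the machine-learning step described above: single-trial epochs (channels × time, e.g. cut around stimulus onsets as in the previous sketch) are flattened into feature vectors and a linear classifier is trained to separate responses to scientifically interesting images from the rest. The abstract does not prescribe a particular algorithm, so the choice of LDA, the feature representation, and the cross-validation setup here are illustrative assumptions.

```python
# Sketch of training a P300-based "interest" classifier from a subject's
# responses. LDA on flattened epochs is a common choice for P300 detection,
# but the paper does not specify an algorithm, so treat this as one option.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_p300_classifier(epochs, labels):
    """epochs: (n_trials, n_channels, n_samples); labels: 1 = image judged
    scientifically interesting, 0 = not interesting."""
    X = epochs.reshape(len(epochs), -1)        # flatten channels x time
    clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
    scores = cross_val_score(clf, X, labels, cv=5, scoring="roc_auc")
    clf.fit(X, labels)
    return clf, scores.mean()

# clf, auc = train_p300_classifier(epochs, labels)
# print(f"cross-validated AUC: {auc:.2f}")
```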

    Implicit retrieval of salient images using brain computer interface

    Space missions are often equipped with several high-definition sensors that can autonomously collect a potentially enormous amount of data. The bottleneck in retrieving these often precious datasets is the onboard data storage capacity and the communication bandwidth, which limit the amount of data that can be sent back to Earth. In this paper, we propose a method based on the analysis of brain electrical activity to identify the scientific interest of experts in a given image within a large set of images. Such a method can be used to efficiently create an abundant training set (images and whether they are scientifically interesting) at an image presentation rate that can go beyond conscious processing, with less interrogation time for experts and relatively high performance.
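    One way the labelled training set described above could be assembled is sketched below: each rapid presentation is logged with the image shown and the output of a single-trial P300 detector, and repeated presentations of the same image are aggregated into a label. The record layout, the aggregation rule, and the idea of repeated presentations are assumptions made for illustration rather than the paper's procedure.

```python
# Sketch of turning a rapid serial presentation log plus per-trial P300
# detections into a labelled training set (image, interesting-or-not).
from collections import defaultdict
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Trial:
    image_id: str        # image shown at this onset
    onset_sample: int    # EEG sample index of the presentation
    p300_detected: bool  # output of a single-trial detector (e.g. the LDA above)

def build_training_set(trials: List[Trial], min_hits: int = 2) -> List[Tuple[str, int]]:
    """Aggregate repeated presentations: label an image as scientifically
    interesting if the P300 was detected at least min_hits times."""
    hits = defaultdict(int)
    shown = defaultdict(int)
    for t in trials:
        shown[t.image_id] += 1
        hits[t.image_id] += int(t.p300_detected)
    return [(img, int(hits[img] >= min_hits)) for img in shown]

# trials = [Trial("img_0412.jpg", 10543, True), Trial("img_0412.jpg", 20871, True)]
# labelled = build_training_set(trials)   # [("img_0412.jpg", 1)]
```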

    Presentations and demos of the "Curiosity Cloning" project

    This presentation showed work we have done on using brain-computer interfaces to detect scientific curiosity in images. The equipment used by the group from EPFL Lausanne was a full 32-node EEG, while the CLARITY group from DCU used a home-built 4-channel EEG analyser with a total cost of less than €1,000. Results indicate that the P300 ERP is a positive indicator of curiosity, especially in experts.

    On the evolution of autonomous decision-making and communication in collective robotics

    In this thesis, we use evolutionary robotics techniques to automatically design and synthesise behaviour for groups of simulated and real robots. Our contribution is in the design of non-trivial individual and collective behaviour; decisions about solitary or social behaviour are temporal, and they are interdependent with communicative acts. In particular, we study time-based decision-making in a social context: how the experiences of robots unfold in time and how these experiences influence their interaction with the rest of the group. We propose three experiments based on non-trivial real-world cooperative scenarios. First, we study social cooperative categorisation; signalling and communication evolve in a task where cooperation among robots is not a priori required. The communication and categorisation skills of the robots are co-evolved from scratch, and the emerging time-dependent individual and social behaviours are successfully tested on real robots. Second, we show on real hardware evidence of the success of evolved neuro-controllers in controlling two autonomous robots that have to grip each other (autonomously self-assemble). Our experiment constitutes the first fully evolved approach to such a task, which requires sophisticated and fine sensory-motor coordination, and it highlights the minimal conditions needed to achieve assembly in autonomous robots by reducing the assumptions made a priori by the experimenter to a functional minimum. Third, we present the first work in the literature to deal with the design of homogeneous control mechanisms for morphologically heterogeneous robots, that is, robots that do not share the same hardware characteristics. We show how artificial evolution designs individual behaviours and communication protocols that allow cooperation between robots of different types, using dynamical neural networks that specialise on-line depending on the nature of the morphology of each robot. The experiments briefly described above contribute to the advancement of the state of the art in evolving neuro-controllers for collective robotics, both from an application-oriented, engineering point of view and from a more theoretical point of view.
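    The common machinery behind such experiments, evolving the parameters of a neural controller that is then cloned onto every robot in the group, can be sketched generically as below. The network architecture, the simple (mu + lambda) evolutionary loop, and the toy aggregation fitness are placeholders chosen for illustration; they are not the controllers, tasks, or fitness functions used in the thesis.

```python
# Generic sketch of evolving the weights of a small neural controller shared
# by all robots in a group ("homogeneous" control). All parameters and the
# fitness function are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_HID, N_OUT = 8, 5, 2                 # sensors, hidden units, motors
GENOME_LEN = N_IN * N_HID + N_HID * N_OUT

def controller(genome, sensors):
    """Feed-forward network: genome is a flat weight vector."""
    w1 = genome[:N_IN * N_HID].reshape(N_IN, N_HID)
    w2 = genome[N_IN * N_HID:].reshape(N_HID, N_OUT)
    hidden = np.tanh(sensors @ w1)
    return np.tanh(hidden @ w2)              # e.g. left/right wheel speeds

def evaluate(genome, n_robots=4, steps=200):
    """Placeholder fitness: run the same controller on every robot in a toy
    simulation and score how tightly the group aggregates."""
    pos = rng.uniform(-1, 1, size=(n_robots, 2))
    for _ in range(steps):
        for i in range(n_robots):
            sensors = np.concatenate([pos[i], pos.mean(axis=0) - pos[i],
                                      rng.normal(0, 0.05, N_IN - 4)])
            pos[i] += 0.01 * controller(genome, sensors)
    return -np.mean(np.linalg.norm(pos - pos.mean(axis=0), axis=1))

# Simple (mu + lambda) evolutionary loop: keep the best 5, mutate offspring.
pop = rng.normal(0, 1, size=(20, GENOME_LEN))
for gen in range(50):
    fitness = np.array([evaluate(g) for g in pop])
    parents = pop[np.argsort(fitness)[-5:]]
    pop = np.repeat(parents, 4, axis=0) + rng.normal(0, 0.1, (20, GENOME_LEN))
```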

    Evolution of acoustic communication between two cooperating robots

    In this paper we describe a model in which artificial evolution is employed to design neural mechanisms that control the motion of two autonomous robots required to communicate through sound to perform a common task. The results of this work are a proof of concept: they demonstrate that evolution can exploit a very simple sound communication system to design the mechanisms that allow the robots to cooperate by employing acoustic interactions. The analysis of the evolved strategies uncovers the basic properties of the communication protocol. © Springer-Verlag Berlin Heidelberg 2007.
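    The kind of minimal acoustic channel such a model relies on can be sketched as follows: one network output drives a speaker and one input reads the sound emitted by the other robot, attenuated with distance. The attenuation model, the state layout, and the network interface are assumptions for illustration, not the simulation used in the paper.

```python
# Sketch of a minimal acoustic coupling between two controlled robots:
# each controller maps [own sensors..., heard sound] -> [motor..., speaker].
# The inverse-square attenuation and data layout are illustrative assumptions.
import numpy as np

def sound_intensity(emitter_out, distance, ref=1.0):
    """Simple inverse-square-style attenuation of the emitted tone."""
    return emitter_out * ref / (1.0 + distance ** 2)

def step(net_a, net_b, state_a, state_b, dist):
    """One control step for two coupled robots; nets are callables such as
    the evolved controller above, states hold 'sensors' and 'speaker'."""
    heard_a = sound_intensity(state_b["speaker"], dist)
    heard_b = sound_intensity(state_a["speaker"], dist)
    out_a = net_a(np.concatenate([state_a["sensors"], [heard_a]]))
    out_b = net_b(np.concatenate([state_b["sensors"], [heard_b]]))
    state_a["speaker"], state_b["speaker"] = out_a[-1], out_b[-1]
    return out_a[:-1], out_b[:-1]            # motor commands for each robot
```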